Advanced Lane Detection

Pipeline of this project:

1) Compute the camera calibration matrix and distortion coefficients from chessboard images.
2) Apply distortion correction to raw images.
3) Use color transforms and gradients to create a binary thresholded image.
4) Apply a perspective transform to the binary thresholded image to get a top-down view.
5) Detect lane pixels and fit a polynomial to find the lane boundary.
6) Determine lane curvature and vehicle position with respect to center.
7) Warp the detected lane boundaries back onto the original image.

In [1]:
%matplotlib inline
%reload_ext autoreload
%autoreload 2
In [2]:
import numpy as np
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import glob
import os
import time

from moviepy.editor import VideoFileClip
from IPython.display import HTML

Camera Calibration

In [3]:
CAL_IMGS = "camera_cal"
In [4]:
calib_files = os.listdir(CAL_IMGS)
assert(len(calib_files) > 0)
In [5]:
def draw_imgs(lst, rows, cols=2, figsize=(10, 25), dosave=False, save_dir=""):
    assert(len(lst) > 0)
    assert(rows > 0)
    if dosave:
        assert(os.path.exists(save_dir))
    fig = plt.figure(figsize=figsize)
    fig.tight_layout()
    for i in range(1, rows * cols + 1):
        fig.add_subplot(rows, cols, i)
        img = mpimg.imread(CAL_IMGS + "/" + lst[i-1])
        plt.imshow(img)
    plt.show()
    if dosave:
        fig.savefig(save_dir + "/op_" + str(time.time()) + ".jpg")
In [6]:
def create_dir(dir_name):
    if not os.path.exists(dir_name):
        os.makedirs(dir_name)
In [7]:
# Create directory to save output images
OUTDIR = "output_images/"
create_dir(OUTDIR)
In [8]:
# Quick visual check of the calibration images
draw_imgs(calib_files, len(calib_files)//2, dosave=True, save_dir=OUTDIR)

Calibration

As can be seen in the images above, there are 9 corners per row and 6 corners per column. Let's go ahead and find the corners.
There are 3 images for which the 9 * 6 corner pattern isn't detected, but 17 images are enough for calibration.
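Before running corner detection, the object-point grid used in the next cell can be sanity-checked in isolation. This is a minimal sketch of the `np.mgrid` trick that builds the (x, y, 0) corner coordinates for a 9x6 inner-corner chessboard:

```python
import numpy as np

# Build the (x, y, 0) object-point grid for a 9x6 inner-corner chessboard.
nx, ny = 9, 6
objp = np.zeros((ny * nx, 3), np.float32)
objp[:, :2] = np.mgrid[:nx, :ny].T.reshape(-1, 2)

print(objp.shape)   # (54, 3)
print(objp[0])      # first corner: [0. 0. 0.]
print(objp[-1])     # last corner:  [8. 5. 0.]
```

The z coordinate stays 0 because all chessboard corners lie in one plane.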

In [9]:
nx = 9
ny = 6

objp = np.zeros((ny * nx, 3), np.float32)
objp[:,:2] = np.mgrid[:nx, :ny].T.reshape(-1, 2)

objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.

failed =[]

for idx, name in enumerate(calib_files):
    img = cv2.imread(CAL_IMGS + "/"+ name)
    
    # Convert to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
    
    if ret == True:
        objpoints.append(objp)
        imgpoints.append(corners)
        
        # Draw and display the corners
        cv2.drawChessboardCorners(img, (nx, ny), corners, ret)
        f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,8))
        f.tight_layout()
        ax1.imshow(cv2.cvtColor(cv2.imread(CAL_IMGS + "/"+ name), cv2.COLOR_BGR2RGB))
        ax1.set_title("Original:: " + name, fontsize=18)
        ax2.imshow(cv2.cvtColor(img,cv2.COLOR_BGR2RGB))
        ax2.set_title("Corners:: "+ name, fontsize=18)
        f.savefig(OUTDIR + "/op_" + str(time.time()) + ".jpg")
        
    else:
        failed.append(name)
        
print("Failed for images: [")
print(failed)
print("]")
Failed for images: [
['calibration4.jpg', 'calibration1.jpg', 'calibration5.jpg']
]

Distortion correction

Using the object and image points calculated in step 1, calibrate the camera and compute the camera matrix and distortion coefficients.
Then use the camera matrix and distortion coefficients to undistort images.

In [10]:
def undistort(img_name, objpoints, imgpoints):
    img = cv2.imread(img_name)
    # calibrateCamera expects the image size as (width, height)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img.shape[1::-1], None, None)
    undist = cv2.undistort(img, mtx, dist, None, mtx)
    return undist
In [11]:
def undistort_no_read(img, objpoints, imgpoints):
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img.shape[1::-1], None, None)
    undist = cv2.undistort(img, mtx, dist, None, mtx)
    return undist
In [12]:
undist = undistort(CAL_IMGS+"/calibration17.jpg", objpoints, imgpoints)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,8))
f.tight_layout()
ax1.imshow(cv2.cvtColor(cv2.imread(CAL_IMGS+"/calibration17.jpg"), cv2.COLOR_BGR2RGB))
ax1.set_title("Original:: calibration17.jpg", fontsize=18)
ax2.imshow(cv2.cvtColor(undist, cv2.COLOR_BGR2RGB))
ax2.set_title("Undistorted:: calibration17.jpg", fontsize=18)
f.savefig(OUTDIR + "/op_" + str(time.time()) + ".jpg")
In [13]:
images = glob.glob('test_images/test*.jpg')
for image in images:
    undist = undistort(image, objpoints, imgpoints)
    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,8))
    f.tight_layout()
    ax1.imshow(cv2.cvtColor(cv2.imread(image), cv2.COLOR_BGR2RGB))
    ax1.set_title("Original:: " + image , fontsize=18)
    ax2.imshow(cv2.cvtColor(undist, cv2.COLOR_BGR2RGB))
    ax2.set_title("Undistorted:: "+ image, fontsize=18)
    f.savefig(OUTDIR + "/op_" + str(time.time()) + ".jpg")

Gradient and color transform

We'll use the Sobel filter in both the x and y directions to capture gradient changes along both axes and generate a binary thresholded image.
We'll also threshold the S channel of the HLS color space to get a color-based binary image.
We'll combine both outputs to get the final binary thresholded image.
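The combination logic can be seen on a toy example before running it on real frames. The 2x2 arrays below are hypothetical stand-ins for the gradient and color binaries:

```python
import numpy as np

# Toy "binary images" standing in for the x-gradient, y-gradient and color thresholds.
grad_x = np.array([[1, 0], [1, 0]], dtype=np.uint8)
grad_y = np.array([[1, 1], [0, 0]], dtype=np.uint8)
color  = np.array([[0, 0], [0, 1]], dtype=np.uint8)

# A gradient pixel must pass in both directions (AND); color pixels are OR-ed in.
grad_combined = ((grad_x == 1) & (grad_y == 1)).astype(np.uint8)
final = ((grad_combined == 1) | (color == 1)).astype(np.uint8)
print(final)  # [[1 0]
              #  [0 1]]
```

The AND keeps only pixels strong in both gradient directions, while the OR lets the color threshold recover lane pixels the gradients miss.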

In [14]:
def abs_thresh(img, sobel_kernel=3, mag_thresh=(0, 255), return_grad=False, direction='x'):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    grad = None
    scaled_sobel = None
    
    # Sobel x
    if direction.lower() == 'x':
        grad = cv2.Sobel(gray, cv2.CV_64F, 1, 0,ksize=sobel_kernel) # Take the derivative in x       
    # Sobel y
    else:
        grad = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel) # Take the derivative in y
        
    if return_grad == True:
        return grad
        
    abs_sobel = np.absolute(grad) # Absolute x derivative to accentuate lines away from horizontal
    scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))

    grad_binary = np.zeros_like(scaled_sobel)
    grad_binary[(scaled_sobel >= mag_thresh[0]) & (scaled_sobel < mag_thresh[1])] = 1
    
    return grad_binary
In [15]:
def mag_threshold(img, sobel_kernel=3, mag_thresh=(0, 255)):    
    xgrad =  abs_thresh(img, sobel_kernel=sobel_kernel, mag_thresh=mag_thresh, return_grad=True)
    ygrad =  abs_thresh(img, sobel_kernel=sobel_kernel, mag_thresh=mag_thresh, return_grad=True, direction='y')
    
    magnitude = np.sqrt(np.square(xgrad) + np.square(ygrad))
    scaled_magnitude = np.uint8(255*magnitude/np.max(magnitude))
    mag_binary = np.zeros_like(scaled_magnitude)
    mag_binary[(scaled_magnitude >= mag_thresh[0]) & (scaled_magnitude < mag_thresh[1])] = 1
    
    return mag_binary
In [16]:
def dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    xgrad =  cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
    ygrad =  cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
    
    xabs = np.absolute(xgrad)
    yabs = np.absolute(ygrad)
    
    grad_dir = np.arctan2(yabs, xabs)
    
    binary_output = np.zeros_like(grad_dir).astype(np.uint8)
    binary_output[(grad_dir >= thresh[0]) & (grad_dir < thresh[1])] = 1
    return binary_output
In [17]:
def get_hls_thresh_img(img, thresh=(0, 255)):
    hls_img= cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
    S = hls_img[:, :, 2]

    binary_output = np.zeros_like(S).astype(np.uint8)    
    binary_output[(S >= thresh[0]) & (S < thresh[1])] = 1
    
    return binary_output
In [18]:
# Testing the thresholding
kernel_size = 7
mag_thresh = (40, 100)
dir_thresh = (0.7, 1.4)
color_thresh = (150, 255)
for image_name in images:
    img = undistort(image_name, objpoints, imgpoints)
    
    xabs_bin = abs_thresh(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh)
    yabs_bin = abs_thresh(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh, direction='y')
    magn_bin = mag_threshold(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh)
    dirn_bin = dir_threshold(img, sobel_kernel=15, thresh=dir_thresh)
    
    # Combine all gradient thresholds
    combined = np.zeros_like(dirn_bin)
    combined[((xabs_bin ==1) & (yabs_bin == 1)) | ((magn_bin ==1) & (dirn_bin == 1)) ] = 1
    
    # Apply color threshold
    color_bin = get_hls_thresh_img(img, thresh=color_thresh)
    
    # combine all
    combined_binary = np.zeros_like(combined)
    combined_binary[(color_bin == 1) | (combined == 1)] = 1
    
    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,8))
    f.tight_layout()
    ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    ax1.set_title("Original:: " + image_name, fontsize=18)
    ax2.imshow(combined_binary, cmap='gray')
    ax2.set_title("Threshold Binary:: " + image_name, fontsize=18)
    f.savefig(OUTDIR + "/op_" + str(time.time()) + ".jpg")

Perspective Transform

A perspective transform maps the points in a given image to a different viewpoint.
Here we want a bird's-eye view of the road.
This will be helpful in finding lane curvature.
Note that after the perspective transform, the lanes should appear approximately parallel.
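In the pipeline, cv2.getPerspectiveTransform computes the 3x3 homography from four point pairs. As a NumPy-only sketch of what that solve does (not the OpenCV implementation), the direct linear transform below recovers the same matrix from the trapezoid and rectangle corners used later:

```python
import numpy as np

# Road trapezoid (src) and its bird's-eye rectangle (dst), as used in the pipeline.
src = [(595, 460), (210, 720), (715, 460), (1120, 720)]
dst = [(250, 0), (250, 720), (1050, 0), (1050, 720)]

# Direct linear transform: each point pair gives two rows of a homogeneous system.
A = []
for (x, y), (u, v) in zip(src, dst):
    A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
    A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
_, _, Vt = np.linalg.svd(np.array(A, dtype=float))
H = Vt[-1].reshape(3, 3)  # null-space vector = homography up to scale

def warp_point(H, x, y):
    # Apply the homography in homogeneous coordinates, then divide by w.
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Each src corner should land on its dst corner.
for (x, y), (u, v) in zip(src, dst):
    print(warp_point(H, x, y))
```

The perspective division by w is what lets a trapezoid map to a rectangle, which an affine transform cannot do.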

In [20]:
def transform_image(img, offset=250, src=None, dst=None):    
    img_size = (img.shape[1], img.shape[0])
    
    out_img_orig = np.copy(img)
       
    leftupper  = (595, 460)
    rightupper = (715, 460)
    leftlower  = (210, img.shape[0])
    rightlower = (1120, img.shape[0])
    
    warped_leftupper  = (offset, 0)
    warped_leftlower  = (offset, img.shape[0])
    warped_rightupper = (1050, 0)
    warped_rightlower = (1050, img.shape[0])
    
    color_r = [0, 0, 255]
    color_g = [0, 255, 0]
    line_width = 5
    
    if src is None:
        src = np.float32([leftupper, leftlower, rightupper, rightlower])
        
    if dst is None:
        dst = np.float32([warped_leftupper, warped_leftlower, warped_rightupper, warped_rightlower])
    
    cv2.line(out_img_orig, leftlower, leftupper, color_r, line_width)
    cv2.line(out_img_orig, leftlower, rightlower, color_r, line_width * 2)
    cv2.line(out_img_orig, rightupper, rightlower, color_r, line_width)
    cv2.line(out_img_orig, rightupper, leftupper, color_g, line_width)
    
    # Calculate the perspective transform matrix and its inverse
    M = cv2.getPerspectiveTransform(src, dst)
    minv = cv2.getPerspectiveTransform(dst, src)
    
    # Warp the image
    warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_NEAREST)
    out_warped_img = np.copy(warped)
    
    cv2.line(out_warped_img, warped_leftlower, warped_leftupper, color_r, line_width)
    cv2.line(out_warped_img, warped_leftlower, warped_rightlower, color_r, line_width * 2)
    cv2.line(out_warped_img, warped_rightupper, warped_rightlower, color_r, line_width)
    cv2.line(out_warped_img, warped_rightupper, warped_leftupper, color_g, line_width)
    
    return warped, M, minv, out_img_orig, out_warped_img
In [21]:
for image in images:
    img = cv2.imread(image)
    warped, M, minv, out_img_orig, out_warped_img = transform_image(img)
    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,8))
    f.tight_layout()
    ax1.imshow(cv2.cvtColor(out_img_orig, cv2.COLOR_BGR2RGB))
    ax1.set_title("Original:: " + image , fontsize=18)
    ax2.imshow(cv2.cvtColor(out_warped_img, cv2.COLOR_BGR2RGB))
    ax2.set_title("Warped:: "+ image, fontsize=18)
    f.savefig(OUTDIR + "/op_" + str(time.time()) + ".jpg")
In [22]:
for image in images:
    img = undistort(image, objpoints, imgpoints)
    
    xabs_bin = abs_thresh(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh)
    yabs_bin = abs_thresh(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh, direction='y')
    magn_bin = mag_threshold(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh)
    dirn_bin = dir_threshold(img, sobel_kernel=15, thresh=dir_thresh)
    
    # Combine all gradient thresholds
    combined = np.zeros_like(dirn_bin)
    combined[((xabs_bin ==1) & (yabs_bin == 1)) | ((magn_bin ==1) & (dirn_bin == 1)) ] = 1
    
    # Apply color threshold
    color_bin = get_hls_thresh_img(img, thresh=color_thresh)
    
    # combine all
    combined_binary = np.zeros_like(combined)
    combined_binary[(color_bin == 1) | (combined == 1)] = 1
    
    warped, warp_matrix, unwarp_matrix, out_img_orig, out_warped_img = transform_image(combined_binary)
    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,8))
    f.tight_layout()
    ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    ax1.set_title("Original:: " + image , fontsize=18)
    ax2.imshow(warped, cmap='gray')
    ax2.set_title("Transformed:: "+ image, fontsize=18)
    f.savefig(OUTDIR + "/op_" + str(time.time()) + ".jpg")

Lane line pixel detection and polynomial fitting

With a binary image where the lane lines are clearly visible, we now have to decide which pixels belong to a lane,
and whether they belong to the left lane or the right lane.

The thresholded image pixels are either 0 or 1, so a histogram of column sums over the bottom half of the image
will show two peaks, which are good starting positions for finding lane pixels.
We can then use a sliding window to collect the remaining pixels.
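The histogram idea can be checked on a synthetic warped binary image; the lane positions below are made up:

```python
import numpy as np

# Synthetic 720x1280 warped binary image: two vertical "lanes" at x~300 and x~900.
binary = np.zeros((720, 1280), dtype=np.uint8)
binary[:, 298:302] = 1
binary[:, 898:902] = 1

# Column-wise histogram of the bottom half; its two peaks give the lane bases.
histogram = np.sum(binary[binary.shape[0] // 2:, :], axis=0)
midpoint = histogram.shape[0] // 2
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
print(leftx_base, rightx_base)  # 298 898
```

Note np.argmax returns the first of several equal maxima, so each base is the leftmost column of its peak.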

In [23]:
def find_lines(warped_img, nwindows=9, margin=100, minpix=50):
    
    # Take a histogram of the bottom half of the image
    histogram = np.sum(warped_img[warped_img.shape[0]//2:,:], axis=0)
        
    # Create an output image to draw on and visualize the result
    out_img = np.dstack((warped_img, warped_img, warped_img)) * 255
    
    # Find the peak of the left and right halves of the histogram
    # These will be the starting point for the left and right lines
    midpoint = histogram.shape[0] // 2
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint

    # Set height of windows - based on nwindows above and image shape
    window_height = warped_img.shape[0] // nwindows
    
    # Identify the x and y positions of all nonzero pixels in the image
    nonzero = warped_img.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])
    
    # Current positions to be updated later for each window in nwindows
    leftx_current = leftx_base
    rightx_current = rightx_base

    # Create empty lists to receive left and right lane pixel indices
    left_lane_inds = []
    right_lane_inds = []

    # Step through the windows one by one
    for window in range(nwindows):
        # Identify window boundaries in x and y (and right and left)
        win_y_low = warped_img.shape[0] - (window+1)*window_height
        win_y_high = warped_img.shape[0] - window*window_height
        
        ### Find the four below boundaries of the window ###
        win_xleft_low = leftx_current - margin  
        win_xleft_high = leftx_current + margin  
        win_xright_low =  rightx_current - margin 
        win_xright_high = rightx_current + margin  
        
        # Draw the windows on the visualization image
        cv2.rectangle(out_img,(win_xleft_low,win_y_low), (win_xleft_high,win_y_high),(0,255,0), 2) 
        cv2.rectangle(out_img,(win_xright_low,win_y_low), (win_xright_high,win_y_high),(0,255,0), 2) 
        
        ### Identify the nonzero pixels in x and y within the window ###
        good_left_inds = ((nonzeroy >= win_y_low ) & (nonzeroy < win_y_high) &\
                            (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
        good_right_inds = ((nonzeroy >= win_y_low ) & (nonzeroy < win_y_high) &\
                            (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
        
        # Append these indices to the lists
        left_lane_inds.append(good_left_inds)
        right_lane_inds.append(good_right_inds)
        
        ### If we found > minpix pixels, recenter the next window ###
        ### (`leftx_current` or `rightx_current`) on their mean position ###
        if len(good_left_inds) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left_inds]))
        if len(good_right_inds) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right_inds]))

    # Concatenate the arrays of indices (previously was a list of lists of pixels)
    try:
        left_lane_inds = np.concatenate(left_lane_inds)
        right_lane_inds = np.concatenate(right_lane_inds)
    except ValueError:
        # Avoids an error if the above is not implemented fully
        pass

    # Extract left and right line pixel positions
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds] 
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]

    return leftx, lefty, rightx, righty, left_lane_inds, right_lane_inds, out_img

def fit_polynomial(binary_warped, nwindows=9, margin=100, minpix=50, show=True):
    # Find our lane pixels first
    leftx, lefty, rightx, righty, left_lane_inds, right_lane_inds, out_img \
        = find_lines(binary_warped, nwindows=nwindows, margin=margin, minpix=minpix)

    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)

    # Generate x and y values for plotting
    ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
    try:
        left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
        right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    except TypeError:
        # Avoids an error if `left_fit` and `right_fit` are still None or incorrect
        print('The function failed to fit a line!')
        left_fitx = 1*ploty**2 + 1*ploty
        right_fitx = 1*ploty**2 + 1*ploty

    # Colors in the left and right lane regions
    out_img[lefty, leftx] = [255, 0, 0]
    out_img[righty, rightx] = [0, 0, 255]

    # Plots the left and right polynomials on the lane lines
    if show == True:
        plt.plot(left_fitx, ploty, color='yellow')
        plt.plot(right_fitx, ploty, color='yellow')

    return left_fit, right_fit, left_fitx, right_fitx, left_lane_inds, right_lane_inds, out_img

Skip the sliding window step once you've found the lines

Once the lines are found, we don't need to do a blind search; we can search around the existing line within some margin, since the lanes won't shift much between two frames of video.
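The margin search in the next cell can be illustrated on synthetic data (all values here are made up):

```python
import numpy as np

# Synthetic binary image: a straight "lane" near x=400 plus one stray pixel.
binary = np.zeros((100, 800), dtype=np.uint8)
binary[np.arange(100), 400 + (np.arange(100) % 5)] = 1  # pixels near the line
binary[50, 700] = 1                                     # stray pixel far away

fit = np.array([0.0, 0.0, 400.0])  # previous frame's polynomial: x = 400
margin = 50

nonzero = binary.nonzero()
nonzeroy, nonzerox = np.array(nonzero[0]), np.array(nonzero[1])

# Keep only pixels within +/- margin of the previous polynomial.
line_x = fit[0] * nonzeroy**2 + fit[1] * nonzeroy + fit[2]
inds = (nonzerox > line_x - margin) & (nonzerox < line_x + margin)
print(inds.sum())  # 100: the stray pixel at x=700 is excluded
```

This is the same band test `search_around_poly` applies per pixel, just with a hand-made fit instead of one from the previous frame.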

In [24]:
def search_around_poly(binary_warped, left_fit, right_fit, ymtr_per_pixel, xmtr_per_pixel, margin=100):
    nonzero = binary_warped.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])
    
    left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + 
                    left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) + 
                    left_fit[1]*nonzeroy + left_fit[2] + margin)))
    right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + 
                    right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) + 
                    right_fit[1]*nonzeroy + right_fit[2] + margin)))
    
    # Again, extract left and right line pixel positions
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds] 
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]
    
    # Fit a second order polynomial to each using `np.polyfit`
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)

    # Fit a second-order polynomial in real-world (meter) space
    left_fit_mtr = np.polyfit(lefty*ymtr_per_pixel, leftx*xmtr_per_pixel, 2)
    right_fit_mtr = np.polyfit(righty*ymtr_per_pixel, rightx*xmtr_per_pixel, 2)
    
    return left_fit, right_fit, left_fit_mtr, right_fit_mtr
In [25]:
left_fit, right_fit, left_fitx, right_fitx, left_lane_indices, right_lane_indices, out_img = fit_polynomial(warped, nwindows=20)
plt.imshow(out_img)
plt.savefig(OUTDIR + "/op_" + str(time.time()) + ".jpg")

Radius of curvature

We can fit a circle that approximately matches the curve near a given point.

The radius of curvature is the radius of that circle. For a second-order polynomial x = A*y^2 + B*y + C, the radius of curvature at a point y is:

R = (1 + (2*A*y + B)^2)^(3/2) / |2*A|
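As a quick sanity check, the formula can be evaluated for a hypothetical set of coefficients (values made up, assumed to already be in meter space):

```python
# Radius of curvature of x = A*y^2 + B*y + C at a given y:
#   R = (1 + (2*A*y + B)**2) ** 1.5 / abs(2*A)
A, B = 1e-4, -0.1   # hypothetical meter-space coefficients
y_eval = 30.0       # evaluate near the bottom of a ~30 m view

R = (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)
print(R)  # roughly 5 km: a nearly straight lane
```

The smaller |A| is (the straighter the lane), the larger the radius, which matches the intuition that a straight line has infinite radius of curvature.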

In [26]:
def radius_curvature(img, left_fit, right_fit, xmtr_per_pixel, ymtr_per_pixel):
    ploty = np.linspace(0, img.shape[0] - 1, img.shape[0])
    y_eval = np.max(ploty)
    
    # The fits are in pixel space; convert the coefficients to meter space
    # (for x = A*y^2 + B*y + C: A_m = A * xm/ym^2, B_m = B * xm/ym)
    # so the radii come out in meters.
    left_a = left_fit[0] * xmtr_per_pixel / (ymtr_per_pixel ** 2)
    left_b = left_fit[1] * xmtr_per_pixel / ymtr_per_pixel
    right_a = right_fit[0] * xmtr_per_pixel / (ymtr_per_pixel ** 2)
    right_b = right_fit[1] * xmtr_per_pixel / ymtr_per_pixel
    
    # Find radii of curvature at the bottom of the image
    left_rad = ((1 + (2*left_a*y_eval*ymtr_per_pixel + left_b)**2)**1.5) / np.absolute(2*left_a)
    right_rad = ((1 + (2*right_a*y_eval*ymtr_per_pixel + right_b)**2)**1.5) / np.absolute(2*right_a)
    
    return (left_rad, right_rad)
In [27]:
def dist_from_center(img, left_fit, right_fit, xmtr_per_pixel, ymtr_per_pixel):
    # Evaluate the pixel-space fits at the bottom of the image,
    # then convert the pixel offset from center into meters.
    ymax = img.shape[0]
    
    center = img.shape[1] / 2
    
    lineLeft = left_fit[0]*ymax**2 + left_fit[1]*ymax + left_fit[2]
    lineRight = right_fit[0]*ymax**2 + right_fit[1]*ymax + right_fit[2]
    
    mid = lineLeft + (lineRight - lineLeft)/2
    dist = (mid - center) * xmtr_per_pixel
    if dist >= 0. :
        message = 'Vehicle location: {:.2f} m right'.format(dist)
    else:
        message = 'Vehicle location: {:.2f} m left'.format(abs(dist))
    
    return message
In [28]:
def draw_lines(img, left_fit, right_fit, minv):
    ploty = np.linspace(0, img.shape[0]-1, img.shape[0])
    color_warp = np.zeros_like(img).astype(np.uint8)
    
    # Find left and right points.
    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    
    # Recast the x and y points into usable format for cv2.fillPoly()
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))
    
    # Draw the lane onto the warped blank image
    cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
    
    # Warp the blank back to original image space using inverse perspective matrix 
    unwarp_img = cv2.warpPerspective(color_warp, minv, (img.shape[1], img.shape[0]))
    return cv2.addWeighted(img, 1, unwarp_img, 0.3, 0)
In [29]:
def show_curvatures(img, leftx, rightx, xmtr_per_pixel, ymtr_per_pixel):
    (left_curvature, right_curvature) = radius_curvature(img, leftx, rightx, xmtr_per_pixel, ymtr_per_pixel)
    dist_txt = dist_from_center(img, leftx, rightx, xmtr_per_pixel, ymtr_per_pixel)
    
    out_img = np.copy(img)
    cv2.putText(out_img, 'Left lane curvature: {:.2f} m'.format(left_curvature), 
                (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,255,255), 2)
    cv2.putText(out_img, 'Right lane curvature: {:.2f} m'.format(right_curvature), 
                (50, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,255,255), 2)
    cv2.putText(out_img, dist_txt, (50, 150), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,255,255), 2)
    
    return out_img
In [30]:
for image in images:    
    img = undistort(image, objpoints, imgpoints)
    
    xabs_bin = abs_thresh(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh)
    yabs_bin = abs_thresh(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh, direction='y')
    magn_bin = mag_threshold(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh)
    dirn_bin = dir_threshold(img, sobel_kernel=15, thresh=dir_thresh)
    
    # Combine all gradient thresholds
    combined = np.zeros_like(dirn_bin)
    combined[((xabs_bin ==1) & (yabs_bin == 1)) | ((magn_bin ==1) & (dirn_bin == 1)) ] = 1
    
    # Apply color threshold
    color_bin = get_hls_thresh_img(img, thresh=color_thresh)
    
    # combine all
    combined_binary = np.zeros_like(combined)
    combined_binary[(color_bin == 1) | (combined == 1)] = 1
    
    warped, warp_matrix, unwarp_matrix, out_img_orig, out_warped_img = transform_image(combined_binary)
    
    xmtr_per_pixel=3.7/700
    ymtr_per_pixel=30/720
    
    left_fit, right_fit, left_fitx, right_fitx, left_lane_indices, right_lane_indices, out_img = fit_polynomial(warped, nwindows=12, show=False)
    lane_img = draw_lines(img, left_fit, right_fit, unwarp_matrix)
    out_img = show_curvatures(lane_img, left_fit, right_fit, xmtr_per_pixel, ymtr_per_pixel)
    
    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,8))
    f.tight_layout()
    ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    ax1.set_title("Original:: " + image , fontsize=18)
    ax2.imshow(cv2.cvtColor(out_img, cv2.COLOR_BGR2RGB))
    ax2.set_title("Lane:: "+ image, fontsize=18)
    f.savefig(OUTDIR + "/op_" + str(time.time()) + ".jpg")

Pipeline for video

In [31]:
class Lane():
    def __init__(self, max_counter):
        self.current_fit_left=None
        self.best_fit_left = None
        self.history_left = [np.array([False])] 
        self.current_fit_right=None
        self.best_fit_right = None
        self.history_right = [np.array([False])] 
        self.counter = 0
        self.max_counter = max_counter
        self.src = None
        self.dst = None
        
    def set_presp_indices(self, src, dst):
        self.src = src
        self.dst = dst
        
    def reset(self):
        self.current_fit_left=None
        self.best_fit_left = None
        self.history_left =[np.array([False])] 
        self.current_fit_right = None
        self.best_fit_right = None
        self.history_right =[np.array([False])] 
        self.counter = 0
        
    def update_fit(self, left_fit, right_fit):
        if self.counter > self.max_counter:
            self.reset()
        else:
            self.current_fit_left = left_fit
            self.current_fit_right = right_fit
            self.history_left.append(left_fit)
            self.history_right.append(right_fit)
            self.history_left = self.history_left[-self.max_counter:] if len(self.history_left) > self.max_counter else self.history_left
            self.history_right = self.history_right[-self.max_counter:] if len(self.history_right) > self.max_counter else self.history_right
            self.best_fit_left = np.mean(self.history_left, axis=0)
            self.best_fit_right = np.mean(self.history_right, axis=0)
        
    def process_image(self, image):
        img = undistort_no_read(image, objpoints, imgpoints)
    
        xabs_bin = abs_thresh(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh)
        yabs_bin = abs_thresh(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh, direction='y')
        magn_bin = mag_threshold(img, sobel_kernel=kernel_size, mag_thresh=mag_thresh)
        dirn_bin = dir_threshold(img, sobel_kernel=15, thresh=dir_thresh)
    
        # Combine all gradient thresholds
        combined = np.zeros_like(dirn_bin)
        combined[((xabs_bin ==1) & (yabs_bin == 1)) | ((magn_bin ==1) & (dirn_bin == 1)) ] = 1
    
        # Apply color threshold
        color_bin = get_hls_thresh_img(img, thresh=color_thresh)
    
        # combine all
        combined_binary = np.zeros_like(combined)
        combined_binary[(color_bin == 1) | (combined == 1)] = 1
    
        if self.src is not None and self.dst is not None:
            warped, warp_matrix, unwarp_matrix, out_img_orig, out_warped_img = transform_image(combined_binary, src=self.src, dst= self.dst)
        else:
            warped, warp_matrix, unwarp_matrix, out_img_orig, out_warped_img = transform_image(combined_binary)
    
        xmtr_per_pixel=3.7/700
        ymtr_per_pixel=30/720
    
        if self.best_fit_left is None or self.best_fit_right is None:
            left_fit, right_fit, left_fitx, right_fitx, left_lane_indices, right_lane_indices, out_img = fit_polynomial(warped, nwindows=15, show=False)
        else:
            left_fit, right_fit, left_lane_indices, right_lane_indices= search_around_poly(warped, self.best_fit_left, self.best_fit_right, xmtr_per_pixel, ymtr_per_pixel)
            
        self.counter += 1
        
        lane_img = draw_lines(img, left_fit, right_fit, unwarp_matrix)
        out_img = show_curvatures(lane_img, left_fit, right_fit, xmtr_per_pixel, ymtr_per_pixel)
        
        self.update_fit(left_fit, right_fit)
        
        return out_img
In [32]:
clip1 = VideoFileClip("project_video.mp4")
img = clip1.get_frame(0)

leftupper  = (590, 460)
rightupper = (715, 460)
leftlower  = (290, img.shape[0])
rightlower = (1130, img.shape[0])
    
color_r = [255, 0, 0]
color_g = [0, 255, 0]
line_width = 5
    
src = np.float32([leftupper, leftlower, rightupper, rightlower])

cv2.line(img, leftlower, leftupper, color_r, line_width)
cv2.line(img, leftlower, rightlower, color_r , line_width * 2)
cv2.line(img, rightupper, rightlower, color_r, line_width)
cv2.line(img, rightupper, leftupper, color_g, line_width)

plt.imshow(img)
Out[32]:
<matplotlib.image.AxesImage at 0x7fd88c32fa58>
In [33]:
lane1 = Lane(max_counter=5)

leftupper  = (590, 460)
rightupper = (715, 460)
leftlower  = (290, img.shape[0])
rightlower = (1130, img.shape[0])
    
warped_leftupper  = (250, 0)
warped_leftlower  = (250, img.shape[0])
warped_rightupper = (1050, 0)
warped_rightlower = (1050, img.shape[0])

src = np.float32([leftupper, leftlower, rightupper, rightlower])
dst = np.float32([warped_leftupper, warped_leftlower, warped_rightupper, warped_rightlower])

lane1.set_presp_indices(src, dst)

output = "test_videos_output/project.mp4"
clip1 = VideoFileClip("project_video.mp4")
white_clip = clip1.fl_image(lane1.process_image)
%time white_clip.write_videofile(output, audio=False)
Moviepy - Building video test_videos_output/project.mp4.
Moviepy - Writing video test_videos_output/project.mp4

                                                                
Moviepy - Done !
Moviepy - video ready test_videos_output/project.mp4
CPU times: user 59min 11s, sys: 8.76 s, total: 59min 20s
Wall time: 36min 7s
In [35]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(output))
Out[35]:
In [79]:
clip1 = VideoFileClip("challenge_video.mp4")
image = clip1.get_frame(1)

leftupper  = (625, 460)
rightupper = (710, 460)
leftlower  = (280, image.shape[0])
rightlower = (1100, image.shape[0])

color_r = [255, 0, 0]
color_g = [0, 255, 0]
line_width = 5
    
src = np.float32([leftupper, leftlower, rightupper, rightlower])

cv2.line(image, leftlower, leftupper, color_r, line_width)
cv2.line(image, leftlower, rightlower, color_r , line_width * 2)
cv2.line(image, rightupper, rightlower, color_r, line_width)
cv2.line(image, rightupper, leftupper, color_g, line_width)

plt.imshow(image)
Out[79]:
<matplotlib.image.AxesImage at 0x7fd873eefeb8>
In [72]:
lane2 = Lane(max_counter=7)

leftupper  = (625, 460)
rightupper = (710, 460)
leftlower  = (280, img.shape[0])
rightlower = (1100, img.shape[0])
    
warped_leftupper  = (250, 0)
warped_leftlower  = (250, img.shape[0])
warped_rightupper = (1050, 0)
warped_rightlower = (1050, img.shape[0])

src = np.float32([leftupper, leftlower, rightupper, rightlower])
dst = np.float32([warped_leftupper, warped_leftlower, warped_rightupper, warped_rightlower])

lane2.set_presp_indices(src, dst)

output = "test_videos_output/challenge_video.mp4"
clip1 = VideoFileClip("challenge_video.mp4")
white_clip = clip1.fl_image(lane2.process_image)
%time white_clip.write_videofile(output, audio=False)
Moviepy - Building video test_videos_output/challenge_video.mp4.
Moviepy - Writing video test_videos_output/challenge_video.mp4

Moviepy - Done !
Moviepy - video ready test_videos_output/challenge_video.mp4
CPU times: user 23min 34s, sys: 28.9 s, total: 24min 3s
Wall time: 14min 54s
In [73]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(output))
Out[73]:
In [42]:
clip1 = VideoFileClip("harder_challenge_video.mp4")
img = clip1.get_frame(0)

leftupper  = (450, 550)
rightupper = (800, 550)
leftlower  = (190, img.shape[0])
rightlower = (1010, img.shape[0])
    
color_r = [255, 0, 0]
color_g = [0, 255, 0]
line_width = 5
    
src = np.float32([leftupper, leftlower, rightupper, rightlower])

cv2.line(img, leftlower, leftupper, color_r, line_width)
cv2.line(img, leftlower, rightlower, color_r, line_width * 2)
cv2.line(img, rightupper, rightlower, color_r, line_width)
cv2.line(img, rightupper, leftupper, color_g, line_width)

plt.imshow(img)
Out[42]:
<matplotlib.image.AxesImage at 0x7fd88c7c5ef0>
In [43]:
lane3 = Lane(max_counter=3)
lane3.reset()

leftupper  = (450, 550)
rightupper = (800, 550)
leftlower  = (190, img.shape[0])
rightlower = (1010, img.shape[0])
    
warped_leftupper = (250,0)
warped_rightupper = (250, img.shape[0])
warped_leftlower = (1050, 0)
warped_rightlower = (1050, img.shape[0])

src = np.float32([leftupper, leftlower, rightupper, rightlower])
dst = np.float32([warped_leftupper, warped_rightupper, warped_leftlower, warped_rightlower])

lane3.set_presp_indices(src, dst)

output = "test_videos_output/harder_challenge_video.mp4"
clip1 = VideoFileClip("harder_challenge_video.mp4")
white_clip = clip1.fl_image(lane3.process_image)
%time white_clip.write_videofile(output, audio=False)
Moviepy - Building video test_videos_output/harder_challenge_video.mp4.
Moviepy - Writing video test_videos_output/harder_challenge_video.mp4

Moviepy - Done !
Moviepy - video ready test_videos_output/harder_challenge_video.mp4
CPU times: user 1h 34s, sys: 1min 21s, total: 1h 1min 56s
Wall time: 37min 58s
In [45]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(output))
Out[45]:

1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?

While testing on the challenge and harder challenge videos, most problems stemmed from lighting conditions, shadows, and road-surface artifacts (edges visible on the road other than lane markings). Although the HLS color space works well for the simple video, it also activates noisy areas more strongly (visible in videos 2 and 3). I may try the LAB color space, which separates yellow better.

Averaging over recent fits works well to smooth the polynomial output. The harder challenge also poses a problem with very steep curves; we may need to fit a higher-order polynomial to handle those.
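That frame-to-frame averaging can be sketched as a small ring buffer of polynomial coefficients whose mean is reported each frame. This is a hypothetical stand-in for the buffering inside the `Lane` class — only the `max_counter` parameter mirrors the notebook's `Lane(max_counter=...)`:

```python
import numpy as np
from collections import deque

class FitSmoother:
    """Keep the last `max_counter` polynomial fits and return their
    coefficient-wise mean, smoothing the lane polynomial across frames."""
    def __init__(self, max_counter=7):
        self.fits = deque(maxlen=max_counter)

    def update(self, fit):
        # Append the newest fit; deque drops the oldest automatically.
        self.fits.append(np.asarray(fit, dtype=np.float64))
        return np.mean(self.fits, axis=0)
```

A smaller `max_counter` (as with `max_counter=3` for the harder challenge) reacts faster to sharp curvature changes at the cost of more jitter.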

Also, these algorithms still rely heavily on the lane being visible, the video being taken from a certain angle, favorable lighting conditions, and assumptions such as the histogram having two clear peaks. There might be a better approach based on RNNs or instance segmentation (https://arxiv.org/pdf/1802.05591.pdf).